Mining Emotional Features of Movies

Authors

  • Yang Liu
  • Zhonglei Gu
  • Yu Zhang
  • Yan Liu
Abstract

In this paper, we present an algorithm for mining the emotional features of movies. The algorithm, dubbed Arousal-Valence Discriminant Preserving Embedding (AVDPE), extracts the intrinsic features embedded in movies that are discriminative along both the arousal and valence dimensions. After dimensionality reduction, we use a neural network and a support vector regressor to make the final prediction. Experimental results show that the extracted features capture most of the discriminant information in movie emotions.
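
A minimal sketch of the reduce-then-regress pipeline described above, assuming scikit-learn and synthetic placeholder data; the abstract does not give AVDPE's formulation, so an off-the-shelf PCA stands in for the projection, and a single SVR per affective dimension stands in for the paper's neural network and support vector regressor.

    # Sketch only: PCA substitutes for AVDPE, and the feature/label
    # arrays are hypothetical stand-ins for real movie annotations.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))          # per-clip audio-visual features (placeholder)
    y_arousal = rng.uniform(-1, 1, 200)      # arousal labels (placeholder)
    y_valence = rng.uniform(-1, 1, 200)      # valence labels (placeholder)

    # One regressor per affective dimension, each fit on the reduced features.
    arousal_model = make_pipeline(StandardScaler(), PCA(n_components=32), SVR(kernel="rbf"))
    valence_model = make_pipeline(StandardScaler(), PCA(n_components=32), SVR(kernel="rbf"))
    arousal_model.fit(X, y_arousal)
    valence_model.fit(X, y_valence)

    print(arousal_model.predict(X[:5]), valence_model.predict(X[:5]))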

Similar resources

Connotative Feature Extraction For Movie Recommendation

It is difficult to assess the emotional responses elicited by the content of a film by exploring the film's connotative properties. Connotation is used to represent the emotions conveyed by the audiovisual descriptors, so that it can predict the emotional reaction of the user. The connotative features can be used for the recommendation of movies. There are various methodologies for the ...

A Linked Data-Based Decision Tree Classifier to Review Movies

In this paper, we describe our contribution to the 2015 Linked Data Mining Challenge. The proposed task is concerned with predicting whether a movie's reviews are “good” or “bad”, as the Metacritic website does based on critics’ reviews. First, we describe the sources used to build the training data. Although several sources provide data about movies on the Web in different formats, including RDF, data...
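
A minimal sketch of the good/bad review classification described above, assuming scikit-learn and a synthetic feature matrix; the actual Linked Data features used by the authors are not reproduced here.

    # Sketch only: binary decision-tree classifier over placeholder movie features.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))           # per-movie features from linked data (placeholder)
    y = rng.integers(0, 2, 100)              # 1 = "good", 0 = "bad" (placeholder labels)

    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))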

RUC at MediaEval 2016 Emotional Impact of Movies Task: Fusion of Multimodal Features

In this paper, we present our approaches for the MediaEval Emotional Impact of Movies Task. We extract features from multiple modalities, including the audio, image and motion modalities. SVR and Random Forest are used as our regression models, and late fusion is applied to fuse the different modalities. Experimental results show that the multimodal late fusion is beneficial for predicting global affects and...
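
A minimal late-fusion sketch in the spirit of the snippet above, assuming scikit-learn and synthetic placeholder features; the per-modality models and the equal-weight average are illustrative choices, not the authors' exact setup.

    # Sketch only: one regressor per modality, predictions averaged (late fusion).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_audio = rng.normal(size=(150, 64))     # audio features (placeholder)
    X_image = rng.normal(size=(150, 128))    # image features (placeholder)
    y = rng.uniform(-1, 1, 150)              # global affect labels (placeholder)

    audio_model = SVR().fit(X_audio, y)
    image_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_image, y)

    # Late fusion: combine per-modality predictions with equal weights.
    y_pred = 0.5 * audio_model.predict(X_audio) + 0.5 * image_model.predict(X_image)
    print(y_pred[:5])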

MIC-TJU in MediaEval 2017 Emotional Impact of Movies Task

To predict the emotional impact and fear induced by movies, we propose a framework which employs four audio-visual features. In particular, we utilize features extracted by motion keypoint trajectories and convolutional neural networks to depict the visual information, and extract global and local audio features to describe the audio cues. The early fusion strategy is employed to c...
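
A minimal early-fusion sketch matching the snippet above, with synthetic placeholder features and a single SVR; the concrete visual/audio descriptors and the choice of regressor are assumptions, not the authors' pipeline.

    # Sketch only: early fusion concatenates per-modality features before one model.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_visual = rng.normal(size=(150, 256))   # trajectory/CNN visual features (placeholder)
    X_audio = rng.normal(size=(150, 64))     # global + local audio features (placeholder)
    y_fear = rng.uniform(0, 1, 150)          # fear annotations (placeholder)

    X_fused = np.concatenate([X_visual, X_audio], axis=1)
    model = SVR().fit(X_fused, y_fear)
    print(model.predict(X_fused[:5]))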

THU-HCSI at MediaEval 2016: Emotional Impact of Movies Task

In this paper we describe our team’s approach to the MediaEval 2016 Challenge “Emotional Impact of Movies”. Besides the baseline features, we extract audio features and image features from the video clips. We deploy a Convolutional Neural Network (CNN) to extract the image features and use the OpenSMILE toolbox to extract the audio ones. We also study a multi-scale approach at different levels, aiming at the continu...
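
A minimal sketch of CNN-based image feature extraction as mentioned above, assuming PyTorch/torchvision; a pretrained ResNet-18 stands in for the team's CNN, the frame tensor is random placeholder data, and the OpenSMILE audio step is omitted.

    # Sketch only: pooled CNN features for one clip; frame preprocessing is assumed done.
    import torch
    import torchvision.models as models

    cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    cnn.fc = torch.nn.Identity()             # drop the classifier head, keep 512-d features
    cnn.eval()

    frames = torch.rand(8, 3, 224, 224)      # 8 preprocessed video frames (placeholder)
    with torch.no_grad():
        frame_features = cnn(frames)         # shape: (8, 512)
    clip_feature = frame_features.mean(dim=0)    # average-pool over frames
    print(clip_feature.shape)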

Journal:

Volume   Issue

Pages  -

Publication year: 2016